Nonconvex notions of regularity and convergence of fundamental algorithms for feasibility problems
We consider projection algorithms for solving (nonconvex) feasibility
problems in Euclidean spaces. Of special interest are the Method of Alternating
Projections (MAP) and the Douglas-Rachford or Averaged Alternating Reflection
Algorithm (AAR). In the case of convex feasibility, firm nonexpansiveness of
projection mappings is a global property that yields global convergence of MAP
and, for consistent problems, of AAR. Based on (\epsilon, \delta)-regularity of sets
developed by Bauschke, Luke, Phan and Wang in 2012, a relaxed local version of
firm nonexpansiveness with respect to the intersection is introduced for
consistent feasibility problems. Together with a coercivity condition that
relates to the regularity of the intersection, this yields local linear
convergence of MAP for a wide class of nonconvex problems.
Comment: 22 pages, no figures, 30 references
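The MAP iteration described above can be sketched on a simple convex instance. The two sets below, a line and a closed disk in the plane, are illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimal sketch of the Method of Alternating Projections (MAP) for a
# convex two-set feasibility problem.  The sets (a line and a closed
# disk in R^2) are illustrative assumptions, not the paper's examples.

def proj_line(x, a, b):
    """Project x onto the hyperplane {y : <a, y> = b} (a nonzero)."""
    return x - (a @ x - b) / (a @ a) * a

def proj_ball(x, center, radius):
    """Project x onto the closed ball of the given center and radius."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + radius * d / n

def alternating_projections(x0, steps=200):
    a, b = np.array([1.0, 1.0]), 1.0   # line  x1 + x2 = 1
    c, r = np.zeros(2), 0.8            # disk of radius 0.8 at the origin
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # one MAP step: project onto the disk, then onto the line
        x = proj_line(proj_ball(x, c, r), a, b)
    return x

x = alternating_projections([3.0, -2.0])
```

Since the line passes through the interior of the disk, the intersection is nonempty and the iterates converge linearly to a point in it, consistent with the convex case the abstract contrasts against.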
Alternating Projections and Douglas-Rachford for Sparse Affine Feasibility
The problem of finding a vector with the fewest nonzero elements that
satisfies an underdetermined system of linear equations is an NP-complete
problem that is typically solved numerically via convex heuristics or
nicely-behaved nonconvex relaxations. In this work we consider elementary
methods based on projections for solving a sparse feasibility problem without
employing convex heuristics. In a recent paper Bauschke, Luke, Phan and Wang
(2014) showed that, locally, the fundamental method of alternating projections
must converge linearly to a solution to the sparse feasibility problem with an
affine constraint. In this paper we apply different analytical tools that allow
us to show global linear convergence of alternating projections under familiar
constraint qualifications. These analytical tools can also be applied to other
algorithms. This is demonstrated with the prominent Douglas-Rachford algorithm
where we establish local linear convergence of this method applied to the
sparse affine feasibility problem.
Comment: 29 pages, 2 figures, 37 references. Much expanded version from the last submission. Title changed to reflect new developments.
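The alternating projections scheme for sparse affine feasibility can be sketched as alternating between the projection onto the affine set {x : Ax = b} and the (nonconvex) projection onto the set of s-sparse vectors. The problem sizes, data, and sparsity level below are illustrative assumptions:

```python
import numpy as np

def proj_affine(x, A, b):
    """Project x onto {y : Ay = b}; assumes A has full row rank."""
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def proj_sparse(x, s):
    """Project x onto {y : ||y||_0 <= s}: keep the s largest-magnitude entries."""
    y = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    y[keep] = x[keep]
    return y

# Illustrative data: a 2-sparse solution of an underdetermined linear system.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true

x = rng.standard_normal(10)
for _ in range(500):
    x = proj_affine(proj_sparse(x, 2), A, b)   # one alternating-projections step
```

Each iterate ends on the affine set, so Ax = b holds exactly; whether the iterates also reach a sparse point depends on the constraint qualifications the paper analyzes.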
Convergence in Distribution of Randomized Algorithms: The Case of Partially Separable Optimization
We present a Markov-chain analysis of blockwise-stochastic algorithms for
solving partially block-separable optimization problems. Our main contributions
to the extensive literature on these methods are statements about the Markov
operators and distributions behind the iterates of stochastic algorithms, and
in particular the regularity of Markov operators and rates of convergence of
the distributions of the corresponding Markov chains. This provides a detailed
characterization of the moments of the sequences beyond just the expected
behavior. This also serves as a case study of how randomization restores
favorable properties that iterating on only partial information destroys. We
demonstrate this on stochastic blockwise implementations of the
forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a
special case, convex), nonsmooth optimization.
Comment: 25 pages, 43 references
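A blockwise-stochastic forward-backward iteration of the kind the abstract analyzes can be sketched on an l1-regularized least-squares instance: at each step a block of coordinates is sampled at random, and only that block takes a gradient (forward) step followed by a proximal (backward) step. The block partition, step size, and data are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1, applied blockwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x, A, b, lam):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def block_forward_backward(A, b, lam, blocks, steps=2000, seed=1):
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
    for _ in range(steps):
        blk = blocks[rng.integers(len(blocks))]   # sample one block uniformly
        grad = A.T @ (A @ x - b)                  # gradient of the smooth part
        # forward (gradient) then backward (prox) step on the chosen block only
        x[blk] = soft_threshold(x[blk] - step * grad[blk], step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
blocks = [np.arange(0, 5), np.arange(5, 10)]
x = block_forward_backward(A, b, 0.1, blocks)
```

With a step size of 1/L the objective is nonincreasing along every sample path; the paper's contribution concerns the finer question of how the distributions of such random iterates converge.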